Programmable Graphics Processing Units for Urban Landscape Visualization
Abstract
The availability of modern graphics cards allows for high-end visualization on reasonably priced standard hardware. Real-time visualization of virtual 3D city models can therefore be provided for complete, area-covering data sets. Since complex algorithms can also be implemented directly on modern programmable graphics cards, their considerable computational power becomes available for a growing number of applications. As a first example, techniques from photogrammetry and computer graphics are combined for a direct texture mapping of building façades from geo-referenced terrestrial images. In a second example, the graphics pipeline is adapted for the real-time simulation of SAR imagery by modifying the imaging geometry from visible light to RADAR beams.

1.1 The 3D City Model of Stuttgart

The underlying terrain data is available at a grid spacing of 10 meters for the inner city region and 30 meters for the surrounding area. The included building models are provided by the City Surveying Office of Stuttgart. They were reconstructed photogrammetrically in a semi-automatic process. For data capturing, the building ground plans from the public Automated Real Estate Map (ALK) and the 3D shapes measured from aerial images were used (Wolf 1999). The resulting wireframe model contains the geometry of 36,000 buildings covering an area of 25 km², meaning that almost every building of the city and its suburbs is included. The overall complexity of all building models amounts to 1.5 million triangles. In addition to the majority of relatively simple building models, some prominent buildings like the historical New Palace of Stuttgart are represented by 3,000 triangles and more.

Figure 1. 3D city model of Stuttgart with terrestrially captured façade textures.

Airborne data collection efficiently provides complete sets of 3D building models at sufficient detail and accuracy for a number of applications. However, due to the viewpoint restrictions of airborne platforms, the collected building geometry mainly represents the footprints and the roof shapes, while information on the façades of the buildings is missing. Thus, in order to allow for high-quality visualizations from pedestrian viewpoints, texture mapping based on terrestrial images is usually applied additionally. To improve the visual appearance, the façade textures of 1,000 buildings located in the main pedestrian area were captured. For this purpose, approximately 8,000 ground-based close-up photographs of the building façades were taken using a standard digital camera. The textures were extracted from the images, perspectively corrected, rectified and manually mapped onto the corresponding planar façade segments. We managed to process the aforementioned 1,000 buildings in roughly 30 man-months. Because of the large size of the original texture data set, we had to down-sample the textures to a resolution of approximately 15 centimeters per pixel. Buildings without real captured façade textures were finally colored randomly, with different colors for the façade and the roof. An exemplary visualization of this data set is given in Figure 1.

1.2 Real-time Visualization

The visualization of this 3D city model data set was implemented using the terrain rendering library libMini, which is available under an open source license. The approach realized in the library recursively generates triangle fans from a view-dependent, quad-tree structured triangulation. The library is easy to integrate into other software packages, as its API is simple to use.
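As a rough illustration (not libMini's exact test), the view-dependent refinement can be summarized as follows: a quad of the triangulation is subdivided as long as it is still large in relation to its distance from the viewer,

\[
\frac{d}{\ell} > \tau \quad \Longrightarrow \quad \text{subdivide the quad,}
\]

where $d$ is the edge length of the quad, $\ell$ the distance of its center from the viewpoint and $\tau$ a threshold that controls the global level of detail. Practical criteria of this kind typically also weight in local surface roughness, so that flat regions are represented by fewer triangles.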
To suppress popping artifacts that could otherwise be perceived when the triangulation changes, a technique called geomorphing is applied, which slowly moves each newly introduced vertex from a position on the coarser terrain surface to its final position.

The 3D building models are preprocessed for visualization in order to avoid unnecessary state changes in the rendering pipeline. All buildings are pre-transformed to lie in the same coordinate system, and buildings without textures are grouped together to form larger data units. Primitives are rendered as indexed vertices in a triangle list. As the position of the global light source usually does not change, we omitted the normal vector for each vertex and pre-lit all vertices. In this way, the memory requirement of the vector data could be reduced by 42%.

The performance analysis of the visualization application was conducted on a standard PC equipped with a 3.0 GHz Intel Pentium 4 processor, 1 GB of RAM and an ATI X800 Pro graphics card. In previous work we had used the impostor technique to accelerate the rendering of the building models (Kada et al. 2003). As performance has increased considerably with the latest hardware generations, we felt that the speed-up of the impostor approach no longer justifies its disadvantages, especially the occasional frame rate drops. Instead, we preprocessed the data so that most of the models and textures are guaranteed to remain in graphics memory. The time-intensive paging of data in and out of dedicated graphics memory is consequently minimized. Running at a screen resolution of 1280×1024, the application almost reaches real-time performance, meaning that approximately 15 to 20 frames per second are rendered.

2 PROGRAMMABLE GRAPHICS HARDWARE FOR DIRECT TEXTURE MAPPING

During visualization, the façade texture is provided by linking the 3D object coordinates of the building models to the corresponding coordinates of the texture images. The required world-to-image transform can easily be provided if the exterior orientation and the camera parameters are available for the terrestrial images. However, if these images are used directly, the quality of visual realism is usually limited. This results from the fact that standard texture mapping only allows for simple transformations, while complex geometric image transformations that model perspective rectification or lens distortion are not available. Thus, these effects are usually eliminated before texture mapping by generating 'ideal' images, which are then used as input for rendering. Additional pre-processing steps are required to define occluded parts or to select the optimal texture if multiple images are available.

Alternatively, these tasks can be processed on-the-fly by programmable graphics hardware. Such an implementation can additionally help to solve problems resulting from self-occlusion and to integrate multiple images during texture mapping. By these means, the façade texture is extracted directly from the original images; no intermediate images have to be generated and stored. Thus, within the whole process, image pixels are interpolated only once, which results in façade textures of higher quality. Since our implementation for the on-the-fly generation of façade textures is based on technologies found in today's commodity 3D graphics hardware, the computational power of such devices can be used efficiently.
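To make the role of this transform explicit, it can be written in homogeneous coordinates; the following is a common textbook formulation rather than the paper's own notation. A world point on the façade is mapped to homogeneous image coordinates by the interior and exterior orientation of the terrestrial image:

\[
\begin{pmatrix} u \\ v \\ w \end{pmatrix}
= \mathbf{K}\,[\,\mathbf{R} \mid \mathbf{t}\,]
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix},
\qquad
x = \frac{u}{w}, \quad y = \frac{v}{w},
\]

where $\mathbf{K}$ holds the camera calibration (principal distance and principal point), $\mathbf{R}$ and $\mathbf{t}$ describe the exterior orientation of the image, and the division by $w$ is the perspective division. A radial lens distortion model such as $x' = x\,(1 + k_1 r^2 + k_2 r^4)$, with $r$ measured from the principal point, can be evaluated afterwards. Both steps are exactly what the pixel shader described below performs per pixel, instead of resampling the photographs beforehand.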
Graphics processing units (GPUs) integrated in modern graphics cards are optimized for the transformation of vertices and the processing of pixel data. As they have evolved from a fixed-function to a programmable pipeline design, they can now be utilized in various fields of application. The programs that are executed on the hardware are called shaders. They can be implemented using high-level programming languages like HLSL, developed by Microsoft (Gray 2003), or C for graphics (Cg), developed by NVIDIA (Fernando 2003). In our approach, shaders are used to realize specialized projective texture lookups, depth buffer algorithms and an on-the-fly removal of lens distortion for calibrated cameras. This approach can be implemented based on the graphics API Direct3D 9.0, which defines dynamic flow control in Pixel Shader 3.0. By these means, the lens distortion in the images can be corrected on-the-fly in the pixel shader.

2.1 Texture Extraction and Placement

Our approach uses the rendering pipeline of the graphics card to generate quadrilateral texture images. In general, the function of the pipeline is to render a visualization of a scene from a given viewpoint based on three-dimensional objects, textures and light sources. Because the texture images that are mapped onto the façades during visualization have to be represented by quadrilaterals, the polygons of the building are substituted by their bounding rectangles during the extraction process (see Figure 2, left). For these bounding rectangles, 3D world coordinates are available. This information is used to determine the corresponding image pixels, which provide the required façade texture.

In more detail, the first step is to set up the graphics rendering pipeline so that it draws into the entire target pixel buffer of the final façade texture. For this purpose, the transformation matrices are initialized with the identity, so that drawing a unit square renders all pixels in the target buffer as intended. As no color information is provided yet, a photograph must be assigned to the pipeline as an input texture from which the color information is taken. As mentioned above, the polygon's projected bounding box defines the pixels to be extracted from the input texture. So, in addition to the vertices, the texture coordinates of the four vertices of the unit square are specified as the four-element (homogeneous) world space coordinates of the bounding box. Setting the texture transformation matrix to the aforementioned transformation from world to image space concludes the initialization.

Figure 2: Projected 3D building model overlaid on the input photograph.

During rendering, the rasterizer of the GPU linearly interpolates the four-dimensional texture coordinates across the quadrilateral. A perspective texture lookup in the pixel shader then results in the perspectively correct façade texture (see Figure 2, right). After the extraction, the textures need to be placed on the corresponding polygons. In order to find the two-dimensional texture coordinates for the polygon vertices, a function identical to glTexGen of OpenGL is used (Shreiner, Woo, Neider 2003).
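The per-pixel part of this extraction can be sketched as a small HLSL pixel shader. This is a minimal illustration under assumptions, not the paper's actual code: the sampler and parameter names are invented here, and a simple two-term radial distortion model stands in for whatever calibration model the implementation actually uses.

// Minimal sketch of the per-pixel texture extraction, not the original shader.
// The photograph is bound as the input texture; the interpolated 4D texture
// coordinates hold the homogeneous image coordinates of the current pixel.
sampler2D inputPhoto : register(s0);

float2 principalPoint;   // assumed: principal point, set as a shader constant
float2 radialK;          // assumed: k1, k2 of a two-term radial distortion model

float4 ExtractFacadePixel(float4 imgCoord : TEXCOORD0) : COLOR
{
    // Per-pixel perspective division turns the linearly interpolated
    // homogeneous coordinates into a projective texture lookup.
    float2 ideal = imgCoord.xy / imgCoord.w;

    // Apply the radial distortion model to the ideal position to find where
    // this facade point actually appears in the distorted photograph.
    float2 d  = ideal - principalPoint;
    float  r2 = dot(d, d);
    float2 distorted = principalPoint + d * (1.0f + radialK.x * r2 + radialK.y * r2 * r2);

    // Color lookup in the original photograph.
    return tex2D(inputPhoto, distorted);
}

With a shader of this kind bound, rendering the unit square described above fills the target buffer with the rectified façade texture in a single pass, so the pixels of the photograph are resampled only once.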